
    Non-uniform Feature Sampling for Decision Tree Ensembles

    We study the effectiveness of non-uniform randomized feature selection in decision tree classification. We experimentally evaluate two feature selection methodologies, based on information extracted from the provided dataset: (i) \emph{leverage scores-based} and (ii) \emph{norm-based} feature selection. Experimental evaluation of the proposed feature selection techniques indicates that such approaches might be more effective than naive uniform feature selection, while having performance comparable to the random forest algorithm [3]. Comment: 7 pages, 7 figures, 1 table
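    A minimal sketch of how the two sampling distributions might be computed, in Python/NumPy. The function name, the target rank k, and the per-split usage are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def feature_sampling_probabilities(X, method="norm", k=10):
    """Per-feature sampling probabilities from the data matrix X
    (n samples x d features); 'norm' and 'leverage' mirror the two schemes.
    The target rank k is a free parameter in this sketch."""
    if method == "norm":
        # Norm-based: probability proportional to the squared column norm.
        scores = np.sum(X ** 2, axis=0)
    elif method == "leverage":
        # Leverage-score-based: squared row norms of the top-k right singular vectors.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        scores = np.sum(Vt[: min(k, Vt.shape[0])] ** 2, axis=0)
    else:
        raise ValueError(f"unknown method: {method}")
    return scores / scores.sum()

# At each tree-node split, draw candidate features non-uniformly instead of uniformly.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
p = feature_sampling_probabilities(X, method="leverage")
candidates = rng.choice(X.shape[1], size=5, replace=False, p=p)
```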

    Low Rank Matrix-Valued Chernoff Bounds and Approximate Matrix Multiplication

    In this paper we develop algorithms for approximating matrix multiplication with respect to the spectral norm. Let $A \in \mathbb{R}^{n\times m}$ and $B \in \mathbb{R}^{n\times p}$ be two matrices and $\epsilon > 0$. We approximate the product $A^\top B$ using two down-sampled sketches, $\tilde{A} \in \mathbb{R}^{t\times m}$ and $\tilde{B} \in \mathbb{R}^{t\times p}$, where $t \ll n$, such that $\|\tilde{A}^\top \tilde{B} - A^\top B\| \leq \epsilon \|A\| \|B\|$ with high probability. We use two different sampling procedures for constructing $\tilde{A}$ and $\tilde{B}$: one draws rows of $A$ and $B$ by i.i.d. non-uniform sampling, and the other takes random linear combinations of their rows. We prove bounds that depend only on the intrinsic dimensionality of $A$ and $B$, namely their rank and their stable rank, i.e., the squared ratio between their Frobenius and operator norms. For bounds that depend on rank we employ standard tools from high-dimensional geometry, such as concentration-of-measure arguments combined with elaborate $\epsilon$-net constructions. For bounds that depend on the smaller parameter of stable rank this technology by itself seems too weak; however, we show that in combination with a simple truncation argument it does yield such bounds. To handle analogous bounds for row sampling, we develop a novel matrix-valued Chernoff inequality, which we call the low rank matrix-valued Chernoff bound. Thanks to this inequality, we are able to give bounds that depend only on the stable rank of the input matrices. Comment: 15 pages, to appear in the 22nd ACM-SIAM Symposium on Discrete Algorithms (SODA 2011)
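    A minimal sketch of the row-sampling route in Python/NumPy. The sampling probabilities used here (proportional to the product of the corresponding row norms) and the rescaling are one standard choice that makes the estimator unbiased; the paper's exact distribution may differ:

```python
import numpy as np

def sample_sketches(A, B, t, rng):
    """i.i.d. non-uniform row sampling so that Ã^T B̃ ≈ A^T B.
    Row i is picked with probability p_i ∝ ||A_i|| * ||B_i||
    and rescaled by 1/sqrt(t * p_i) for unbiasedness."""
    norms = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[0], size=t, p=p)        # sample t row indices i.i.d.
    scale = 1.0 / np.sqrt(t * p[idx])[:, None]       # unbiasedness rescaling
    return scale * A[idx], scale * B[idx]

rng = np.random.default_rng(0)
n, m, p_ = 2000, 30, 40
A, B = rng.standard_normal((n, m)), rng.standard_normal((n, p_))
At, Bt = sample_sketches(A, B, t=400, rng=rng)
err = np.linalg.norm(At.T @ Bt - A.T @ B, 2)         # spectral-norm error
```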

    Approximate Matrix Multiplication with Application to Linear Embeddings

    In this paper, we study the problem of approximately computing the product of two real matrices. In particular, we analyze a dimensionality-reduction-based approximation algorithm due to Sarlos [1], introducing the notion of nuclear rank as the ratio of the nuclear norm over the spectral norm. The presented bound has improved dependence on the approximation error (compared to previous approaches), whereas the subspace onto which we project the input matrices has dimension proportional to the maximum of their nuclear ranks and is independent of the input dimensions. In addition, we provide an application of this result to linear low-dimensional embeddings: we show that any Euclidean point-set with bounded nuclear rank can be projected onto a number of dimensions that is independent of the input dimensionality, while achieving additive error guarantees. Comment: 8 pages, International Symposium on Information Theory
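    A minimal sketch of the dimensionality-reduction approach, assuming a Gaussian projection for concreteness (Sarlos-style analyses also cover other subspace embeddings); the function name and dimensions are illustrative:

```python
import numpy as np

def projected_product(A, B, t, rng):
    """Approximate A^T B by projecting both inputs onto a random
    t-dimensional subspace: (SA)^T (SB), with S a t x n Gaussian map.
    With the 1/sqrt(t) scaling, E[(SA)^T (SB)] = A^T B."""
    n = A.shape[0]
    S = rng.standard_normal((t, n)) / np.sqrt(t)
    return (S @ A).T @ (S @ B)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((5000, 25)), rng.standard_normal((5000, 35))
approx = projected_product(A, B, t=300, rng=rng)
err = np.linalg.norm(approx - A.T @ B, 2)            # spectral-norm error
```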

    Hidden cliques and the certification of the restricted isometry property

    Compressed sensing is a technique for finding sparse solutions to underdetermined linear systems. This technique relies on properties of the sensing matrix, such as the restricted isometry property. Sensing matrices that satisfy this property with optimal parameters are mainly obtained via probabilistic arguments. Deciding whether a given matrix satisfies the restricted isometry property is a non-trivial computational problem. Indeed, we show in this paper that restricted isometry parameters cannot be approximated in polynomial time within any constant factor, under the assumption that the hidden clique problem is hard. Moreover, on the positive side, we propose an improvement on the brute-force enumeration algorithm for checking the restricted isometry property.
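    For reference, a minimal sketch of the plain brute-force check (not the paper's improved enumeration): the restricted isometry constant $\delta_s$ is the largest deviation of $\sigma^2$ from 1 over all $s$-column submatrices, so the naive algorithm enumerates every subset:

```python
import numpy as np
from itertools import combinations

def rip_constant_bruteforce(M, s):
    """Exact restricted isometry constant delta_s of M by enumerating all
    column subsets of size s (exponential in s; illustration only)."""
    delta = 0.0
    for cols in combinations(range(M.shape[1]), s):
        sv = np.linalg.svd(M[:, cols], compute_uv=False)
        # delta_s = max over subsets of max(|sigma_max^2 - 1|, |sigma_min^2 - 1|)
        delta = max(delta, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(0)
m, n, s = 12, 20, 3
M = rng.standard_normal((m, n)) / np.sqrt(m)   # typical RIP-friendly normalization
print(rip_constant_bruteforce(M, s))
```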

    Randomized Dimensionality Reduction for k-means Clustering

    We study the topic of dimensionality reduction for $k$-means clustering. Dimensionality reduction encompasses the union of two approaches: \emph{feature selection} and \emph{feature extraction}. A feature selection based algorithm for $k$-means clustering selects a small subset of the input features and then applies $k$-means clustering on the selected features. A feature extraction based algorithm for $k$-means clustering constructs a small set of new artificial features and then applies $k$-means clustering on the constructed features. Despite the significance of $k$-means clustering as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for $k$-means clustering are not known. On the other hand, two provably accurate feature extraction methods for $k$-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress towards a better understanding of dimensionality reduction for $k$-means clustering. Namely, we present the first provably accurate feature selection method for $k$-means clustering and, in addition, we present two feature extraction methods. The first feature extraction method is based on random projections and it improves upon the existing results in terms of time complexity and number of features needed to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and it also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal $k$-means objective value. Comment: IEEE Transactions on Information Theory, to appear
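    A minimal sketch of the random-projection feature extraction route, using NumPy and scikit-learn's KMeans; the target dimension r below is an illustrative choice, not the paper's bound:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, d, k = 1000, 500, 5
X = rng.standard_normal((n, d))                  # n points in d dimensions

# Feature extraction via random projection: embed the points into r << d
# dimensions, then run k-means on the projected point-set.
r = 50
G = rng.standard_normal((d, r)) / np.sqrt(r)     # scaled Gaussian projection
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X @ G)
```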

    Randomized Block Kaczmarz Method with Projection for Solving Least Squares

    The Kaczmarz method is an iterative method for solving overdetermined linear systems of equations $Ax = b$. The randomized version of the Kaczmarz method put forth by Strohmer and Vershynin iteratively projects onto a randomly chosen solution space given by a single row of the matrix $A$, and converges exponentially in expectation to the solution of a consistent system. In this paper we analyze two block versions of the method, each with a randomized projection, that converge in expectation to the least squares solution of inconsistent systems. Our approach utilizes a paving of the matrix $A$ to guarantee exponential convergence, and suggests that paving yields a significant improvement in performance in certain regimes. The proposed method is an extension of the block Kaczmarz method analyzed by Needell and Tropp and the randomized extended Kaczmarz method of Zouzias and Freris. The contribution is thus two-fold: unlike the standard Kaczmarz method, our methods converge to the least-squares solution of inconsistent systems, and by using appropriate blocks of the matrix this convergence can be significantly accelerated. Numerical experiments suggest that the proposed algorithm can indeed lead to advantages in practice.
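    A minimal sketch of the basic randomized block Kaczmarz step in Python/NumPy. It uses a naive row partition rather than a paving, and omits the extra randomized projection that the paper adds to reach the least-squares solution of inconsistent systems; all names are illustrative:

```python
import numpy as np

def block_kaczmarz(A, b, n_blocks, iters, rng):
    """At each step, pick a random block of rows and project the iterate
    onto that block's solution space via the pseudoinverse."""
    blocks = np.array_split(rng.permutation(A.shape[0]), n_blocks)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        tau = blocks[rng.integers(n_blocks)]         # random block of row indices
        x += np.linalg.pinv(A[tau]) @ (b[tau] - A[tau] @ x)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(300)  # inconsistent
x = block_kaczmarz(A, b, n_blocks=15, iters=500, rng=rng)
```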